On Overfitting Avoidance as Bias

Author

  • David H. Wolpert
Abstract

In supervised learning it is commonly believed that penalizing complex functions helps one avoid "overfitting" functions to data, and therefore improves generalization. It is also commonly believed that cross-validation is an effective way to choose amongst algorithms for fitting functions to data. In a recent paper, Schaffer (1993) presents experimental evidence disputing these claims. The current paper consists of a formal analysis of these contentions of Schaffer's. It proves that his contentions are valid, although some of his experiments must be interpreted with caution.
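The abstract's second claim concerns using cross-validation to choose among function-fitting algorithms. As a minimal illustration (hypothetical data and polynomial models, not from the paper), the sketch below uses k-fold cross-validation to pick a polynomial degree, where the underfit and overfit candidates both score worse than the match to the true complexity:

```python
import numpy as np

def kfold_cv_mse(x, y, degree, k=5, seed=0):
    """Estimate held-out MSE of a degree-`degree` polynomial fit via k-fold CV."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coeffs = np.polyfit(x[train], y[train], degree)  # fit on k-1 folds
        pred = np.polyval(coeffs, x[test])               # score on held-out fold
        errs.append(np.mean((y[test] - pred) ** 2))
    return float(np.mean(errs))

# Noisy samples from a quadratic target (illustrative, not from the paper).
rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 40)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(0, 0.1, size=x.shape)

# Compare an underfit (1), well-matched (2), and overfit (9) hypothesis class.
scores = {d: kfold_cv_mse(x, y, d) for d in (1, 2, 9)}
best = min(scores, key=scores.get)
```

Wolpert's point, of course, is that such demonstrations depend on the match between the chosen bias and the (here, known) true relationship; on other targets the same procedure need not help.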


Similar articles

Sparse Data and the Effect of Overfitting Avoidance in Decision Tree Induction

the training data itself to tell us whether it is sparse enough to make unpruned trees preferable. As argued at length in [Schaffer, 1992b], training data cannot tell us what bias is appropriate to use in interpreting it. In particular, the sparsity of data depends on the complexity of the true relationship underlying data generation; and it is not data but domain knowledge that can tell us how c...


Stacked Training for Overfitting Avoidance in Deep Networks

When training deep networks and other complex networks of predictors, the risk of overfitting is typically of large concern. We examine the use of stacking, a method for training multiple simultaneous predictors in order to simulate the overfitting in early layers of a network, and show how to utilize this approach for both forward training and backpropagation learning in deep networks. We then...


On overfitting and asymptotic bias in batch reinforcement learning with partial observability

This paper stands in the context of reinforcement learning with partial observability and limited data. In this setting, we focus on the tradeoff between asymptotic bias (suboptimality with unlimited data) and overfitting (additional suboptimality due to limited data), and theoretically show that while potentially increasing the asymptotic bias, a smaller state representation decreases the risk...


A Comparison of the Effectiveness of Cognitive Bias Modification in Real and Placebo Conditions on Attentional Bias and Approach Bias in Opium Abusers

Background & Aim: Inability to control drug use is considered a core aspect of drug dependency. Part of this inability is due to cognitive biases resulting from individuals’ constant usage of drugs. The aim of the present study was to compare the effectiveness of cognitive bias modification in real and placebo conditions on attentional bias and approach bias in opium abusers. Methods: This stud...


Overfitting and Neural Networks: Conjugate Gradient and Backpropagation

Methods for controlling the bias/variance tradeoff typically assume that overfitting or overtraining is a global phenomenon. For multi-layer perceptron (MLP) neural networks, global parameters such as the training time (e.g. based on validation tests), network size, or the amount of weight decay are commonly used to control the bias/variance tradeoff. However, the degree of overfitting can vary...
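The global parameters this abstract mentions can be made concrete with weight decay in its simplest setting, a linear model with an L2 penalty (a hypothetical ridge-regression sketch, not code from the paper): a single global decay strength trades variance against bias, and setting it too high visibly hurts held-out error.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Weight decay as an L2 penalty: minimize ||Xw - y||^2 + lam * ||w||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Synthetic over-parameterized problem: 30 samples, 20 features, 3 of them active.
rng = np.random.default_rng(0)
n, d = 30, 20
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.0, 0.5]
y = X @ w_true + rng.normal(0, 0.5, size=n)

# Fresh held-out data to measure the effect of the single global parameter lam.
X_test = rng.normal(size=(200, d))
y_test = X_test @ w_true + rng.normal(0, 0.5, size=200)

mse = {lam: float(np.mean((X_test @ ridge_fit(X, y, lam) - y_test) ** 2))
       for lam in (0.0, 1.0, 100.0)}
```

One global lam must serve every weight at once; the abstract's observation is that in an MLP the degree of overfitting can vary across the network, which such a single knob cannot capture.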




Publication date: 1993